1. Talebi H., Milanfar P. NIMA: Neural image assessment. IEEE Trans. Image Process. 2018;27(8):3998–4011.
https://doi.org/10.1109/TIP.2018.2831899
2. Russakovsky O., Deng J., Su H., Krause J., Satheesh S., Ma S. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015;115(3):211–252.
https://doi.org/10.1007/s11263-015-0816-y
3. Li D., Jiang T., Lin W., Jiang M. Which has better visual quality: The clear blue sky or a blurry animal? IEEE Trans. Multimedia. 2019;21(5):1221–1234.
https://doi.org/10.1109/TMM.2018.2875354
4. Chen C., Mo J., Hou J., Wu H., Liao L., Sun W. et al. TOPIQ: A top-down approach from semantics to distortions for image quality assessment. arXiv.org. 06.08.2023. Available at:
https://arxiv.org/abs/2308.03060 (accessed: 15.01.2025).
https://doi.org/10.48550/arXiv.2308.03060
5. Mittal A., Moorthy A. K., Bovik A. C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012;21(12):4695–4708.
https://doi.org/10.1109/TIP.2012.2214050
6. Mittal A., Soundararajan R., Bovik A. C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2013;20(3):209–212.
https://doi.org/10.1109/LSP.2012.2227726
7. Zhang W., Ma K., Yan J., Deng D., Wang Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2020;30(1):36–47.
https://doi.org/10.1109/TCSVT.2018.2886771
8. Ying Z., Niu H., Gupta P., Mahajan D., Ghadiyaram D., Bovik A. From patches to pictures (PaQ-2-PiQ): Mapping the perceptual space of picture quality. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA: IEEE; 2020, pp. 3575–3585.
https://doi.org/10.1109/CVPR42600.2020.00363
9. Su S., Yan Q., Zhu Y., Zhang C., Ge X., Sun J., Zhang Y. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA: IEEE; 2020, pp. 3664–3673.
https://doi.org/10.1109/CVPR42600.2020.00372
10. Wang J., Chan K. C. K., Loy C. C. Exploring CLIP for assessing the look and feel of images. In: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence (AAAI’23/IAAI’23/EAAI’23). Washington, DC: AAAI Press; 2023, vol. 37, pp. 2555–2563.
https://doi.org/10.1609/aaai.v37i2.25353
11. Yang S., Wu T., Shi S., Lao S., Gong Y., Cao M. et al. MANIQA: Multi-dimension attention network for no-reference image quality assessment. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). New Orleans, LA: IEEE; 2022, pp. 1191–1199.
https://doi.org/10.1109/CVPRW56347.2022.00126
12. Bosse S., Maniry D., Müller K.-R., Wiegand Th., Samek W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017;27(1):206–219.
https://doi.org/10.1109/TIP.2017.2760518
13. Wang Z., Bovik A. C., Sheikh H. R., Simoncelli E. P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004;13(4):600–612.
https://doi.org/10.1109/TIP.2003.819861
14. Radford A., Kim J. W., Hallacy C., Ramesh A., Goh G., Agarwal S. et al. Learning transferable visual models from natural language supervision. In: Proceedings of the 38th International Conference on Machine Learning. S. l.: PMLR; 2021, vol. 139, pp. 8748–8763. Available at:
https://proceedings.mlr.press/v139/radford21a.html (accessed: 15.01.2025).
15. Dosovitskiy A., Beyer L., Kolesnikov A., Weissenborn D., Zhai X., Unterthiner Th. et al. An image is worth 16×16 words: Transformers for image recognition at scale. arXiv.org. 03.06.2021. Available at:
https://arxiv.org/abs/2010.11929 (accessed: 15.01.2025).
https://doi.org/10.48550/arXiv.2010.11929
16. Ghadiyaram D., Bovik A. C. Massive online crowdsourced study of subjective and objective picture quality. IEEE Trans. Image Process. 2016;25(1):372–387.
https://doi.org/10.1109/TIP.2015.2500021
17. Lin H., Hosu V., Saupe D. KonIQ-10k: Towards an ecologically valid and large-scale IQA database. arXiv.org. 2018. Available at:
https://arxiv.org/abs/1803.08489 (accessed: 15.01.2025).
https://doi.org/10.48550/arXiv.1803.08489
18. Thomee B., Shamma D. A., Friedland G., Elizalde B., Ni K., Poland D. et al. YFCC100M: The new data in multimedia research. Commun. ACM. 2016;59(2):64–73.
https://doi.org/10.1145/2812802
19. Ponomarenko N., Jin L., Ieremeiev O., Lukin V., Egiazarian K., Astola J. et al. Image database TID2013: Peculiarities, results and perspectives. Signal Process. Image Commun. 2015;30:57–77.
https://doi.org/10.1016/j.image.2014.10.009
20. Hastie T., Tibshirani R., Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer; 2001. XVI, 536 p.
https://doi.org/10.1007/978-0-387-21606-5
21. Scikit-learn: Machine Learning in Python. Available at:
https://scikit-learn.org/stable (accessed: 16.01.2025).
22. Prokhorenkova L., Gusev G., Vorobev A., Dorogush A. V., Gulin A. CatBoost: Unbiased boosting with categorical features. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS’18). Red Hook, NY: Curran Associates; 2018, pp. 6639–6649.
23. Chen T., Guestrin C. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16). New York: ACM; 2016, pp. 785–794.
https://doi.org/10.1145/2939672.2939785
24. Ke G., Meng Q., Finley Th., Wang T., Chen W., Ma W. et al. LightGBM: A highly efficient gradient boosting decision tree. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17). Red Hook, NY: Curran Associates; 2017, pp. 3149–3157.
25. Li L., Jamieson K., DeSalvo G., Rostamizadeh A., Talwalkar A. Hyperband: A novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res. 2018;18:6765–6816.
26. Optuna: An open source hyperparameter optimization framework. Available at:
https://optuna.org/ (accessed: 17.01.2025).